Computer Science > Computation and Language
[Submitted on 2 Oct 2019 (v1), last revised 1 Mar 2020 (this version, v4)]
Title: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Abstract: As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on edge devices and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performance on a wide range of tasks, like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40% while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train, and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
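The triple loss is the technical core of the method: a weighted combination of the masked language modeling loss, the soft-target distillation loss, and a cosine-distance loss aligning student and teacher hidden states. The following is a minimal PyTorch sketch of such a combination, not the authors' training code; the loss weights, the temperature, and the assumption that student and teacher share the same hidden size are illustrative choices for the example, not values taken from the paper.

import torch
import torch.nn.functional as F

def triple_loss(student_logits, teacher_logits,
                student_hidden, teacher_hidden, mlm_labels,
                temperature=2.0, w_distill=5.0, w_mlm=2.0, w_cos=1.0):
    """Weighted sum of distillation, MLM, and cosine-distance losses.
    Weights and temperature here are illustrative assumptions."""
    vocab = student_logits.size(-1)
    # 1) Soft-target distillation: KL divergence between the student's and
    #    teacher's temperature-softened output distributions.
    distill = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # 2) Masked language modeling: cross-entropy on the hard labels
    #    (unmasked positions would carry the ignore index -100).
    mlm = F.cross_entropy(student_logits.view(-1, vocab),
                          mlm_labels.view(-1), ignore_index=-100)
    # 3) Cosine-distance loss aligning the directions of the student's and
    #    teacher's hidden-state vectors (target +1 means "make them similar").
    s = student_hidden.view(-1, student_hidden.size(-1))
    t = teacher_hidden.view(-1, teacher_hidden.size(-1))
    cos = F.cosine_embedding_loss(s, t, torch.ones(s.size(0)))
    return w_distill * distill + w_mlm * mlm + w_cos * cos

# Toy shapes: batch of 2 sequences of length 8, vocab 30522, hidden size 768.
student_logits = torch.randn(2, 8, 30522)
teacher_logits = torch.randn(2, 8, 30522)
student_hidden = torch.randn(2, 8, 768)
teacher_hidden = torch.randn(2, 8, 768)
mlm_labels = torch.randint(0, 30522, (2, 8))
print(triple_loss(student_logits, teacher_logits,
                  student_hidden, teacher_hidden, mlm_labels))

In practice the teacher's logits and hidden states would come from a frozen BERT forward pass (no gradients), while the student is trained on this combined objective.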
Submission history
From: Victor Sanh
[v1] Wed, 2 Oct 2019 17:56:28 UTC (275 KB)
[v2] Wed, 16 Oct 2019 14:52:02 UTC (275 KB)
[v3] Fri, 24 Jan 2020 16:58:52 UTC (276 KB)
[v4] Sun, 1 Mar 2020 02:57:50 UTC (276 KB)